A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.
Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses predetermined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computation.Comer 2000, Sect. 11.2 - The Need For Multiple Protocols, p. 177, "They (protocols) are to communication what programming languages are to computation" An alternate formulation states that protocols are to communication what algorithms are to computation.Comer 2000, Sect. 1.3 - Internet Services, p. 3, "Protocols are to communication what algorithms are to computation"
Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack.
Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence.
On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, written by Bob Kahn, which defined the transmission of messages to an IMP. The Network Control Program (NCP) for the ARPANET, developed by Steve Crocker and other graduate students including Jon Postel and Vint Cerf, was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept.
The CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to implement the end-to-end principle and make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP).
Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.
Research in the early 1970s by Bob Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP). Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time.
The International Network Working Group agreed on a connectionless datagram standard which was presented to the CCITT in 1975 but was not adopted by the CCITT nor by the ARPANET. Separate international research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, which was adopted by the CCITT in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems.
TCP software was redesigned as a modular protocol stack, referred to as TCP/IP. This was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete Internet protocol suite by 1989, as outlined in RFC 1122 and RFC 1123, laid the foundation for the growth of TCP/IP as a comprehensive protocol suite and the core component of the emerging Internet.
International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the question of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks; this dispute became known as the Protocol Wars.
Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself.Ben-Ari 1982, chapter 2 - The concurrent programming abstraction, p. 18-19, states the same.Ben-Ari 1982, Section 2.7 - Summary, p. 27, summarizes the concurrent programming abstraction. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems.
To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model.
At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols.Comer 2000, Sect. 11.2 - The Need For Multiple Protocols, p. 177, explains this by drawing analogies between computer communication and programming languages. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design.Sect. 11.10 - The Disadvantage Of Layering, p. 192, states: layering forms the basis for protocol design.
Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite.Comer 2000, Sect. 11.2 - The Need For Multiple Protocols, p. 177, states the same. Some of the best-known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk.
The protocols can be arranged based on functionality in groups, for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions.Comer 2000, Sect. 11.3 - The Conceptual Layers Of Protocol Software, p. 178, "Each layer takes responsibility for handling one part of the problem." To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer.Comer 2000, Sect. 11.11 - The Basic Idea Behind Multiplexing And Demultiplexing, p. 192, states the same.
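The following sketch, in Python, illustrates the idea of a protocol selector used as a demultiplexing key; the selector values and protocol names are invented for the example and do not correspond to any real protocol's numbering.

```python
# Minimal sketch: each layer prepends a one-byte "protocol selector" so the
# receiver knows which upper-layer protocol should get the payload -- the
# same idea as an EtherType, the IP protocol number, or a TCP/UDP port.
# Selector values and names below are hypothetical, for illustration only.

TRANSPORT_SELECTORS = {6: "tcp_like", 17: "udp_like"}
APPLICATION_SELECTORS = {80: "web_like", 25: "mail_like"}

def send(app_selector: int, transport_selector: int, payload: bytes) -> bytes:
    """Extend the message with a selector for each layer (outermost last)."""
    segment = bytes([app_selector]) + payload        # selects the application protocol
    packet = bytes([transport_selector]) + segment   # selects the transport protocol
    return packet

def receive(packet: bytes) -> tuple[str, str, bytes]:
    """Demultiplex by reading the selectors back off, layer by layer."""
    transport = TRANSPORT_SELECTORS[packet[0]]
    application = APPLICATION_SELECTORS[packet[1]]
    return transport, application, packet[2:]

print(receive(send(80, 6, b"hello")))   # ('tcp_like', 'web_like', b'hello')
```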
Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text-oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for their commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), early versions of HTTP (Hypertext Transfer Protocol), and the finger protocol.
This immediate human readability stands in contrast to native binary protocols, which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization).
Text-based protocols are typically optimized for human parsing and interpretation and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol design phases.
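A minimal sketch of such a line-oriented exchange, in Python, with commands and reply codes merely styled after early HTTP and SMTP rather than taken from any one specification:

```python
# Requests and responses are ASCII lines terminated by CR LF, so the traffic
# can be read, typed, and debugged by eye. Commands and reply texts here are
# illustrative, not from any particular protocol specification.

def build_request(verb: str, argument: str) -> bytes:
    """Serialize one command line, e.g. 'GET /index.html' or 'RETR file.txt'."""
    return f"{verb} {argument}\r\n".encode("ascii")

def parse_reply(raw: bytes) -> tuple[int, str]:
    """Split a '<code> <text>' reply line such as '200 OK' or '550 No such file'."""
    line = raw.decode("ascii").rstrip("\r\n")
    code, _, text = line.partition(" ")
    return int(code), text

print(build_request("GET", "/index.html"))   # b'GET /index.html\r\n'
print(parse_reply(b"200 OK\r\n"))            # (200, 'OK')
```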
Binary protocols have been used in the normative documents describing modern standards like ebXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol.
Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:Marsden 1986, Chapter 3 - Fundamental protocol concepts and problem areas, p. 26-42, explains much of the following.
Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages in the proper sequence. Concurrent programming has traditionally been a topic in operating systems theory texts.Ben-Ari 1982, in his preface, p. xiii. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain.Ben-Ari 1982, in his preface, p. xiv. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP).Hoare 1985, Chapter 4 - Communication, p. 133, deals with communication. Concurrency can also be modeled using finite-state machines, such as Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems, encountered in the form of hardware used in telecommunication or electronic devices in general.
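As an illustration of the finite-state-machine view, the following is a toy Mealy-style machine in Python for a hypothetical stop-and-wait sender; the states, events and outputs are invented for the example and are not taken from any standard.

```python
# In a Mealy machine the output produced on each step depends on both the
# current state and the input event. Here the "protocol" is a made-up
# stop-and-wait sender used purely to show the modelling style.

TRANSITIONS = {
    # (state, event)          : (next state, output)
    ("idle",     "send_req"): ("wait_ack", "transmit frame"),
    ("wait_ack", "ack")     : ("idle",     "deliver confirmation"),
    ("wait_ack", "timeout") : ("wait_ack", "retransmit frame"),
}

def step(state: str, event: str) -> tuple[str, str]:
    """Apply one transition; unknown (state, event) pairs raise KeyError."""
    return TRANSITIONS[(state, event)]

state = "idle"
for event in ["send_req", "timeout", "ack"]:
    state, output = step(state, event)
    print(f"{event:>8} -> {output} (now in state '{state}')")
```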
The literature presents numerous analogies between computer communication and programming. By analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another.
The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP) resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite.
The OSI model was developed internationally, based on experience with networks that predated the Internet, as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering.
Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed, for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. For example, IP may be tunneled across an Asynchronous Transfer Mode (ATM) network.
Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules and data formats specified by protocols. The blue lines mark the boundaries of the (horizontal) protocol layers.
To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module which sends the message over the communications channel to the bottom module of system B. On the receiving system B the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B.Comer 2000, Sect. 11.3 - The Conceptual Layers Of Protocol Software, p. 179, the first two paragraphs describe the sending of a message through successive layers.
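A minimal sketch of this send/receive path in Python, with placeholder header strings standing in for real protocol headers:

```python
# Each layer module on system A prepends its own header; the peer module on
# system B strips it again, so the top layer of B receives the original
# message. The layer names and header contents are placeholders only.

LAYERS = ["application", "transport", "network", "link"]

def send_down(message: bytes) -> bytes:
    """The top layer hands the message down; each lower layer encapsulates it."""
    for layer in LAYERS[1:]:
        message = f"[{layer}-hdr]".encode() + message
    return message          # what is put on the communications channel

def receive_up(frame: bytes) -> bytes:
    """Peer modules strip the headers in reverse order on the receiving system."""
    for layer in reversed(LAYERS[1:]):
        header = f"[{layer}-hdr]".encode()
        assert frame.startswith(header), f"expected {layer} header"
        frame = frame[len(header):]
    return frame

wire = send_down(b"hello")
print(wire)                 # b'[link-hdr][network-hdr][transport-hdr]hello'
print(receive_up(wire))     # b'hello'
```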
Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering.Comer 2000, Sect. 11.2 - The need for multiple protocols, p. 178, explains the similarities between protocol software and a compiler, assembler, linker, and loader.
The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary.Comer 2000, Sect. 11.9.1 - Operating System Boundary, p. 192, describes the operating system boundary.
Although the use of protocol layering is today ubiquitous across the field of computer networking, it has historically been criticized by many researchers because abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis.
Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol.Comer 2000, Glossary of Internetworking Terms and Abbreviations, p. 704, term protocol.
Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market shares relevant to the protocol and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol.
In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolistic). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition.
International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned, cooperate closely with each other.Marsden 1986, Section 6.3 - Advantages of standardization, p. 66-67, states the same.
Multiple standards bodies may be involved in the development of a protocol. If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that time-to-live values are monotone decreasing to prevent stable routing loops) may not be respected in another.
In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below.Marsden 1986, Section 14.3 - Layering concepts and general definitions, p. 183-185, explains terminology. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it.
In the OSI model, the layers and their functionality are (from highest to lowest layer): the application layer, which provides network services directly to application processes; the presentation layer, which handles the representation (syntax) of the exchanged data; the session layer, which manages dialogues between the communicating systems; the transport layer, which provides end-to-end data transfer; the network layer, which routes data across intermediate systems; the data link layer, which transfers data between directly connected systems; and the physical layer, which transmits raw bits over the physical medium.
In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the need for a session layer, which is absent in the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks; connectionless networks were first mentioned in an addendum to RM/OSIMarsden 1986, Section 14.11 - Connectionless mode and RM/OSI, p. 195, mentions this. and later incorporated into an update to RM/OSI.
At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code".Comer 2000, Section 1.9 - Internet Protocols And Standardization, p. 12, explains why the IETF did not use existing protocols. The standardization process itself is described in its own standards documents.
Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards.
The wire image of a protocol is the information about the protocol that is visible to an observer on the network path. If some portion of the wire image is not cryptographically authenticated, it is subject to modification by intermediate parties (i.e., middleboxes), which can influence protocol operation. Even if authenticated, if a portion is not encrypted, it will form part of the wire image, and intermediate parties may intervene depending on its content (e.g., dropping packets with particular flags). Signals deliberately intended for intermediary consumption may be left authenticated but unencrypted.
The wire image can be deliberately engineered, encrypting parts that intermediaries should not be able to observe and providing signals for what they should be able to observe. If the provided signals are decoupled from the protocol's operation, they may become untrustworthy. Benign network management and research are affected by metadata encryption; protocol designers must balance observability for operability and research against ossification resistance and end-user privacy. The IETF announced in 2014 that it had determined that large-scale surveillance of protocol operations is an attack due to the ability to infer information from the wire image about users and their behaviour, and that the IETF would "work to mitigate pervasive monitoring" in its protocol designs; this had not been done systematically previously. The Internet Architecture Board recommended in 2023 that disclosure of information by a protocol to the network should be intentional, performed with the agreement of both recipient and sender, authenticated to the degree possible and necessary, only acted upon to the degree of its trustworthiness, and minimised and provided to a minimum number of entities. Engineering the wire image and controlling what signals are provided to network elements was a "developing field" in 2023, according to the IAB.
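One way to realise "authenticated but unencrypted" signals is to pass the fields intended for intermediaries to an AEAD cipher as associated data, so that on-path elements can read them but any tampering causes decryption to fail at the receiver. The Python sketch below assumes the third-party cryptography package is installed; the field names are illustrative and this is not the mechanism of any specific protocol.

```python
# Sketch: the visible header is authenticated (as AEAD associated data) but
# left unencrypted for intermediaries, while the payload is encrypted.
# Requires the third-party 'cryptography' package; names are hypothetical.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)

visible_header = b"flow-id=42;spin=1"   # deliberately observable on the wire
payload = b"application data"           # hidden from the network

ciphertext = aead.encrypt(nonce, payload, visible_header)
wire_image = visible_header + nonce + ciphertext   # what an on-path observer sees

# The receiver, sharing the key, recovers the payload and implicitly verifies
# that the visible header was not modified in transit.
assert aead.decrypt(nonce, ciphertext, visible_header) == payload
```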
Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult.
Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties.
A layering scheme combines both function and domain of use. The dominant layering schemes are the ones developed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes.Comer 2000, Sect. 11.5.1 - The TCP/IP 5-Layer Reference Model, p. 183, states the same. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering.
In networking equipment configuration, a term-of-art distinction is often drawn: the term protocol strictly refers to the transport layer, and the term service refers to protocols that use a transport protocol to carry them. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term service strictly refers to port numbers, and the term application is often used to refer to protocols identified through inspection signatures.
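For example, Python's standard socket module can query the local services database that maps well-known service names to TCP and UDP port numbers; the exact results depend on the local system's configuration.

```python
# Look up well-known service names and port numbers in the local services
# database (e.g. /etc/services). Values shown in comments are typical, but
# depend on the system configuration.
import socket

print(socket.getservbyname("http", "tcp"))   # typically 80
print(socket.getservbyname("smtp", "tcp"))   # typically 25
print(socket.getservbyport(443, "tcp"))      # typically 'https'
```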